Results 1 - 20 of 22,378
1.
Nat Commun ; 15(1): 3116, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600132

ABSTRACT

Spatiotemporally congruent sensory stimuli are fused into a unified percept. The auditory cortex (AC) sends projections to the primary visual cortex (V1), which could provide signals for binding spatially corresponding audio-visual stimuli. However, whether AC inputs in V1 encode sound location remains unknown. Using two-photon axonal calcium imaging and a speaker array, we measured the auditory spatial information transmitted from AC to layer 1 of V1. AC conveys information about the location of ipsilateral and contralateral sound sources to V1. Sound location could be accurately decoded by sampling AC axons in V1, providing a substrate for making location-specific audiovisual associations. However, AC inputs were not retinotopically arranged in V1, and audio-visual modulations of V1 neurons did not depend on the spatial congruency of the sound and light stimuli. The non-topographic sound localization signals provided by AC might allow the association of specific audiovisual spatial patterns in V1 neurons.


Subject(s)
Auditory Cortex , Sound Localization , Visual Cortex , Visual Perception/physiology , Auditory Cortex/physiology , Neurons/physiology , Visual Cortex/physiology , Photic Stimulation/methods , Acoustic Stimulation/methods
2.
Proc Natl Acad Sci U S A ; 121(16): e2309975121, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38588433

ABSTRACT

Research on attentional selection of stimulus features has yielded seemingly contradictory results. On the one hand, many experiments in humans and animals have observed a "global" facilitation of attended features across the entire visual field, even when spatial attention is focused on a single location. On the other hand, several event-related potential studies in humans reported that attended features are enhanced at the attended location only. The present experiment demonstrates that these conflicting results can be explained by differences in the timing of attentional allocation inside and outside the spatial focus of attention. Participants attended to fields of either red or blue randomly moving dots on either the left or right side of fixation with the task of detecting brief coherent motion targets. Recordings of steady-state visual evoked potentials elicited by the flickering stimuli allowed concurrent measurement of the time course of feature-selective attention in visual cortex on both the attended and the unattended sides. The onset of feature-selective attentional modulation on the attended side occurred around 150 ms earlier than on the unattended side. This finding that feature-selective attention is not spatially global from the outset but extends to unattended locations after a temporal delay resolves previous contradictions between studies finding global versus hierarchical selection of features and provides insight into the fundamental relationship between feature-based and location-based (spatial) attention mechanisms.


Subject(s)
Electroencephalography , Evoked Potentials, Visual , Humans , Evoked Potentials , Visual Fields , Attention , Photic Stimulation/methods
3.
eNeuro ; 11(4), 2024 Apr.
Article in English | MEDLINE | ID: mdl-38604776

ABSTRACT

Sensory stimulation is often accompanied by fluctuations at high frequencies (>30 Hz) in brain signals. These could be "narrowband" oscillations in the gamma band (30-70 Hz) or nonoscillatory "broadband" high-gamma (70-150 Hz) activity. Narrowband gamma oscillations, which are induced by presenting some visual stimuli such as gratings and have been shown to weaken with healthy aging and the onset of Alzheimer's disease, hold promise as potential biomarkers. However, since delivering visual stimuli is cumbersome as it requires head stabilization for eye tracking, an equivalent auditory paradigm could be useful. Although simple auditory stimuli have been shown to produce high-gamma activity, whether specific auditory stimuli can also produce narrowband gamma oscillations is unknown. We tested whether auditory ripple stimuli, which are considered an analog to visual gratings, could elicit narrowband oscillations in auditory areas. We recorded 64-channel electroencephalogram from male and female (18 each) subjects while they either fixated on the monitor while passively viewing static visual gratings or listened to stationary and moving ripples, played using loudspeakers, with their eyes open or closed. We found that while visual gratings induced narrowband gamma oscillations with suppression in the alpha band (8-12 Hz), auditory ripples did not produce narrowband gamma but instead elicited very strong broadband high-gamma response and suppression in the beta band (14-26 Hz). Even though we used equivalent stimuli in both modalities, our findings indicate that the underlying neuronal circuitry may not share ubiquitous strategies for stimulus processing.


Subject(s)
Acoustic Stimulation , Auditory Perception , Electroencephalography , Gamma Rhythm , Humans , Male , Female , Gamma Rhythm/physiology , Adult , Auditory Perception/physiology , Young Adult , Photic Stimulation/methods , Visual Perception/physiology
4.
J Vis ; 24(4): 21, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38656529

ABSTRACT

Conscious perception is preceded by long periods of unconscious processing. These periods are crucial for analyzing temporal information and for solving the many ill-posed problems of vision. An important question is what starts and ends these windows and how they may be interrupted. Most experimental paradigms do not offer the methodology required for such investigation. Here, we used the sequential metacontrast paradigm, in which two streams of lines, expanding from the center to the periphery, are presented, and participants are asked to attend to one of the motion streams. If several lines in the attended motion stream are offset, the offsets are known to integrate mandatorily and unconsciously, even if separated by up to 450 ms. Using this paradigm, we here found that external visual objects, such as an annulus, presented during the motion stream, do not disrupt mandatory temporal integration. Thus, if a window is started once, it appears to remain open even in the presence of disruptions that are known to interrupt visual processes normally. Further, we found that interrupting the motion stream with a gap disrupts temporal integration but does not terminate the overall unconscious processing window. Thus, while temporal integration is key to unconscious processing, not all stimuli in the same processing window are integrated together. These results strengthen the case for unconscious processing taking place in windows of sensemaking, during which temporal integration occurs in a flexible and perceptually meaningful manner.


Subject(s)
Motion Perception , Photic Stimulation , Unconscious, Psychology , Humans , Motion Perception/physiology , Photic Stimulation/methods , Adult , Young Adult , Male , Female , Time Factors , Attention/physiology , Contrast Sensitivity/physiology
5.
J Vis ; 24(4): 20, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38656530

ABSTRACT

We obtain large amounts of external information through our eyes, a process often considered analogous to picture mapping onto a camera lens. However, our eyes are never as still as a camera lens, with saccades occurring between fixations and microsaccades occurring within a fixation. Although saccades are agreed to be functional for information sampling in visual perception, it remains unknown if microsaccades have a similar function when eye movement is restricted. Here, we demonstrated that saccades and microsaccades share common spatiotemporal structures in viewing visual objects. Twenty-seven adults viewed faces and houses in free-viewing and fixation-controlled conditions. Both saccades and microsaccades showed distinctive spatiotemporal patterns between face and house viewing that could be discriminated by pattern classifications. The classifications based on saccades and microsaccades could also be mutually generalized. Importantly, individuals who showed more distinctive saccadic patterns between faces and houses also showed more distinctive microsaccadic patterns. Moreover, saccades and microsaccades showed a higher structure similarity for face viewing than house viewing and a common orienting preference for the eye region over the mouth region. These findings suggested a common oculomotor program that is used to optimize information sampling during visual object perception.


Subject(s)
Fixation, Ocular , Saccades , Visual Perception , Humans , Saccades/physiology , Male , Female , Adult , Fixation, Ocular/physiology , Young Adult , Visual Perception/physiology , Photic Stimulation/methods , Pattern Recognition, Visual/physiology
6.
Opt Lett ; 49(8): 2121-2124, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38621091

ABSTRACT

The purpose of this study is to verify the effect of the anisotropic properties of retinal biomechanics on vasodilation measurement. A custom-built optical coherence tomography (OCT) system was used for time-lapse imaging of flicker stimulation-evoked vessel lumen changes in mouse retinas. A comparative analysis revealed significantly larger lumen dilation in the axial direction (18.21%) than in the lateral direction (10.77%). The axial lumen dilation predominantly resulted from movement of the top vessel wall toward the vitreous, whereas the bottom vessel wall remained stable. This observation indicates that the traditional vasodilation measurement in the lateral direction may yield an underestimate.


Subject(s)
Tomography, Optical Coherence , Vasodilation , Animals , Mice , Vasodilation/physiology , Tomography, Optical Coherence/methods , Photic Stimulation/methods , Retina/diagnostic imaging , Retina/physiology , Retinal Vessels/diagnostic imaging , Retinal Vessels/physiology
7.
Article in English | MEDLINE | ID: mdl-38598403

ABSTRACT

Steady-state visual evoked potential (SSVEP), one of the most popular electroencephalography (EEG)-based brain-computer interface (BCI) paradigms, can achieve high performance using calibration-based recognition algorithms. Because collecting calibration data for such algorithms is time-consuming, the least-squares transformation (LST) has been used to reduce the calibration effort for SSVEP-based BCIs. However, the transformation matrices constructed by current LST methods are not precise enough, resulting in large differences between the transformed data and the real data of the target subject, so the constructed spatial filters and reference templates are not effective enough. To address these issues, this paper proposes multi-stimulus LST with an online adaptation scheme (ms-LST-OA). METHODS: The proposed ms-LST-OA consists of two parts. First, to improve the precision of the transformation matrices, we propose multi-stimulus LST (ms-LST), which uses a cross-stimulus learning scheme for cross-subject data transformation. The ms-LST uses data from neighboring stimuli to construct a higher-precision transformation matrix for each stimulus, reducing the differences between transformed and real data. Second, to further optimize the constructed spatial filters and reference templates, we use an online adaptation scheme that learns more features of the target subject's EEG signals through a trial-by-trial iterative process. RESULTS: ms-LST-OA performance was measured on three datasets (the Benchmark, BETA, and UCSD datasets). Using few calibration data, the ITR of ms-LST-OA reached 210.01±10.10 bits/min, 172.31±7.26 bits/min, and 139.04±14.90 bits/min on the three datasets, respectively. CONCLUSION: Using ms-LST-OA can reduce the calibration effort for SSVEP-based BCIs.


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Visual , Humans , Calibration , Photic Stimulation/methods , Electroencephalography/methods , Algorithms
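The least-squares transformation idea summarized in the abstract above, mapping a source subject's trials onto a target subject's calibration data, can be sketched as follows. This is a minimal illustration on simulated data, not the ms-LST-OA implementation; the channel and sample counts, the per-trial averaging, and all variable names are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG trials, shape (channels, samples); sizes are arbitrary.
n_ch, n_samp = 8, 250
target_trial = rng.standard_normal((n_ch, n_samp))      # one calibration trial (target subject)
source_trials = rng.standard_normal((5, n_ch, n_samp))  # several trials (source subject)

# Average the source trials to suppress trial-to-trial noise.
source_mean = source_trials.mean(axis=0)

# Least-squares transformation: find P (channels x channels) minimizing
# ||target_trial - P @ source_mean||_F. Transposing gives the standard
# lstsq form: source_mean.T @ P.T ~= target_trial.T.
P_t, *_ = np.linalg.lstsq(source_mean.T, target_trial.T, rcond=None)
P = P_t.T

transformed = P @ source_mean
residual = np.linalg.norm(target_trial - transformed)
baseline = np.linalg.norm(target_trial - source_mean)

# The fitted transform can never do worse than leaving the data untouched,
# since P = identity is one admissible solution.
assert residual <= baseline
```

In the actual method, transformed trials from many source subjects would then be pooled to build spatial filters and reference templates for the target subject with little calibration data of their own.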
8.
J Vis ; 24(4): 3, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38558158

ABSTRACT

The sudden onset of a visual object or event elicits an inhibition of eye movements at latencies approaching the minimum delay of visuomotor conductance in the brain. Typically, information presented via multiple sensory modalities, such as sound and vision, evokes stronger and more robust responses than unisensory information. Whether and how multisensory information affects ultra-short latency oculomotor inhibition is unknown. In two experiments, we investigate smooth pursuit and saccadic inhibition in response to multisensory distractors. Observers tracked a horizontally moving dot and were interrupted by an unpredictable visual, auditory, or audiovisual distractor. Distractors elicited a transient inhibition of pursuit eye velocity and catch-up saccade rate within ∼100 ms of their onset. Audiovisual distractors evoked stronger oculomotor inhibition than visual- or auditory-only distractors, indicating multisensory response enhancement. Multisensory response enhancement magnitudes were equal to the linear sum of responses to component stimuli. These results demonstrate that multisensory information affects eye movements even at ultra-short latencies, establishing a lower time boundary for multisensory-guided behavior. We conclude that oculomotor circuits must have privileged access to sensory information from multiple modalities, presumably via a fast, subcortical pathway.


Subject(s)
Brain , Pursuit, Smooth , Humans , Reaction Time/physiology , Brain/physiology , Saccades , Memory , Photic Stimulation/methods
9.
J Vis ; 24(4): 22, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38662347

ABSTRACT

Solving a maze effectively relies on both perception and cognition. Studying maze-solving behavior contributes to our knowledge about these important processes. Through psychophysical experiments and modeling simulations, we examine the role of peripheral vision, specifically visual crowding in the periphery, in mental maze-solving. Experiment 1 measured gaze patterns while varying maze complexity, revealing a direct relationship between visual complexity and maze-solving efficiency. Simulations of the maze-solving task using a peripheral vision model confirmed the observed crowding effects while making an intriguing prediction that saccades provide a conservative measure of how far ahead observers can perceive the path. Experiment 2 confirms that observers can judge whether a point lies on the path at considerably greater distances than their average saccade. Taken together, our findings demonstrate that peripheral vision plays a key role in mental maze-solving.


Subject(s)
Problem Solving , Saccades , Humans , Problem Solving/physiology , Saccades/physiology , Visual Fields/physiology , Maze Learning/physiology , Male , Young Adult , Psychophysics/methods , Photic Stimulation/methods , Female , Adult , Visual Perception/physiology
10.
Transl Vis Sci Technol ; 13(4): 30, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38662401

ABSTRACT

Purpose: To determine whether light chromaticity, without the defocus induced by longitudinal chromatic aberration (LCA), is sufficient to regulate eye growth. Methods: An interferometric setup based on a spatial light modulator was used to illuminate the dominant eyes of 23 participants for 30 minutes with three aberration-free stimulation conditions: (1) short-wavelength (450 nm), (2) long-wavelength (638 nm), and (3) broadband (450-700 nm) light, covering a retinal area of 12°. The non-dominant eye was occluded and served as the control eye. Axial length and choroidal thickness were measured before and after the illumination period. Results: Axial length increased significantly from baseline for short-wavelength (P < 0.01, 7.4 ± 2.2 µm) and long-wavelength (P = 0.01, 4.8 ± 1.7 µm) light. The broadband condition showed a non-significant increase in axial length (P = 0.08, 5.1 ± 3.5 µm). Choroidal thickness decreased significantly for long-wavelength light (P < 0.01, -5.7 ± 2.2 µm), but there was no significant change after short-wavelength or broadband illumination. Axial length and choroidal thickness did not differ significantly between the test and control eyes or between the illumination conditions (all P > 0.05). The illuminated versus non-illuminated choroidal zones also showed no significant difference (all P > 0.05). Conclusions: All stimulation conditions, with short-wavelength, long-wavelength, and broadband light, led to axial elongation and choroidal thinning. Therefore, light chromaticity without LCA-induced defocus appears insufficient to regulate eye growth. Translational Relevance: This study helps clarify whether light chromaticity alone is a sufficient regulator of eye growth.


Subject(s)
Axial Length, Eye , Choroid , Humans , Choroid/anatomy & histology , Choroid/growth & development , Choroid/radiation effects , Female , Male , Adult , Young Adult , Light , Interferometry/methods , Tomography, Optical Coherence , Photic Stimulation/methods
11.
Cereb Cortex ; 34(4), 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38652553

ABSTRACT

Luminance and spatial contrast provide information on the surfaces and edges of objects. We investigated neural responses to black and white surfaces in the primary visual cortex (V1) of mice and monkeys. Unlike primates that use their fovea to inspect objects with high acuity, mice lack a fovea and have low visual acuity. It thus remains unclear whether monkeys and mice share similar neural mechanisms to process surfaces. The animals were presented with white or black surfaces and the population responses were measured at high spatial and temporal resolution using voltage-sensitive dye imaging. In mice, the population response to the surface was not edge-dominated with a tendency to center-dominance, whereas in monkeys the response was edge-dominated with a "hole" in the center of the surface. The population response to the surfaces in both species exhibited suppression relative to a grating stimulus. These results reveal the differences in spatial patterns to luminance surfaces in the V1 of mice and monkeys and provide evidence for a shared suppression process relative to grating.


Subject(s)
Mice, Inbred C57BL , Photic Stimulation , Animals , Photic Stimulation/methods , Mice , Male , Contrast Sensitivity/physiology , Visual Cortex/physiology , Neurons/physiology , Primary Visual Cortex/physiology , Species Specificity , Voltage-Sensitive Dye Imaging , Macaca mulatta
12.
Elife ; 13, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38629828

ABSTRACT

Global synchronization of vasomotion induced by oscillating visual stimuli was identified in the mouse brain. Endogenous autofluorescence was used, and the vessel 'shadow' was quantified to evaluate the magnitude of the frequency-locked vasomotion. This method allows vasomotion to be easily quantified in non-transgenic wild-type mice using either wide-field macro-zoom microscopy or deep-brain fiber photometry. Vertical stripes oscillating horizontally at a low temporal frequency (0.25 Hz) were presented to the awake mouse, and oscillatory vasomotion locked to the temporal frequency of the visual stimulation was induced not only in the primary visual cortex but across a wide surface area of the cortex and the cerebellum. The visually induced vasomotion adapted to a wide range of stimulation parameters. Repeated trials of the visual stimulus presentations resulted in plastic entrainment of vasomotion. A horizontally oscillating visual stimulus is known to induce the horizontal optokinetic response (HOKR). The amplitude of the eye movement is known to increase with repeated training sessions, and the flocculus region of the cerebellum is known to be essential for this learning to occur. Here, we show a strong correlation between the average HOKR performance gain and the vasomotion entrainment magnitude in the cerebellar flocculus. Therefore, the plasticity of vasomotion and of neuronal circuits appears to occur in parallel. Efficient energy delivery by the entrained vasomotion may contribute to meeting the energy demand for increased coordinated neuronal activity and the subsequent neuronal circuit reorganization.


Subject(s)
Brain , Cerebellum , Mice , Animals , Cerebellum/physiology , Nystagmus, Optokinetic , Neurons , Learning , Photic Stimulation/methods
13.
Acta Neurobiol Exp (Wars) ; 84(1): 1-25, 2024 Mar 28.
Article in English | MEDLINE | ID: mdl-38587328

ABSTRACT

We employed intrinsic signal optical imaging (ISOI) to investigate orientation sensitivity bias in the visual cortex of young mice. Optical signals were recorded in response to moving light gratings stimulating ipsi-, contra-, and binocular eye inputs. ISOI allowed visualization of cortical areas activated by gratings of a specific orientation and of temporal changes in light scatter during visual stimulation. These results confirmed ISOI as a reliable technique for imaging the activity of large populations of neurons in the mouse visual cortex. Our results revealed that the contralateral ocular input activated a larger area of the primary visual cortex than the ipsilateral input and produced the highest light scatter response amplitudes of all ocular inputs. Horizontal gratings moving vertically induced the most significant changes in light scatter when presented contralaterally and binocularly, surpassing stimulation by vertical or oblique gratings. These observations suggest dedicated integration mechanisms for the combined inputs from both eyes. We also explored the relationship between the point luminance change (PLC) of grating stimuli and ISOI time courses across movement orientations and ocular inputs, finding higher cross-correlation values for cardinal orientations and ipsilateral inputs. These findings suggest specific activation of different neuronal assemblies within the mouse's primary visual cortex by grating stimuli of the corresponding orientation. However, further investigation is needed to examine this summation hypothesis. Our study highlights the potential of optical imaging as a valuable tool for exploring functional-anatomical relationships in the mouse visual system.


Subject(s)
Primary Visual Cortex , Visual Cortex , Animals , Mice , Neurons , Optical Imaging , Visual Cortex/physiology , Photic Stimulation/methods
14.
Article in English | MEDLINE | ID: mdl-38437148

ABSTRACT

In steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems, traditional flickering stimulation patterns face challenges in achieving a trade-off in both BCI performance and visual comfort across various frequency bands. To investigate the optimal stimulation paradigms with high performance and high comfort for each frequency band, this study systematically compared the characteristics of SSVEP and user experience of different stimulation paradigms with a wide stimulation frequency range of 1-60 Hz. The findings suggest that, for a better balance between system performance and user experience, ON and OFF grid stimuli with a Weber contrast of 50% can be utilized as alternatives to traditional flickering stimulation paradigms in the frequency band of 1-25 Hz. In the 25-35 Hz range, uniform flicker stimuli with the same 50% contrast are more suitable. In the higher frequency band, traditional uniform flicker stimuli with a high 300% contrast are preferred. These results are significant for developing high performance and user-friendly SSVEP-based BCI systems.


Subject(s)
Brain-Computer Interfaces , Evoked Potentials, Visual , Humans , Photic Stimulation/methods , Electroencephalography/methods , Computer Systems
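The Weber contrast values quoted in the abstract above (50% and 300%) follow the standard definition (L_stimulus - L_background) / L_background. A minimal sketch of that relation, with arbitrary luminance values chosen purely for illustration:

```python
def weber_contrast(l_stim: float, l_bg: float) -> float:
    """Weber contrast: (L_stimulus - L_background) / L_background."""
    return (l_stim - l_bg) / l_bg

# 50% Weber contrast means the stimulus is 1.5x the background luminance;
# 300% means 4x the background (arbitrary luminance units).
assert weber_contrast(150.0, 100.0) == 0.5   # 50% contrast
assert weber_contrast(400.0, 100.0) == 3.0   # 300% contrast
```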
15.
Curr Biol ; 34(5): R195-R197, 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38471446

ABSTRACT

The representation of visual shape is a critical component of our perception of the objects around us. A new study exploited shape aftereffects to reveal the high-dimensional space of geometric features our brains use to represent shape.


Subject(s)
Form Perception , Visual Perception , Photic Stimulation/methods , Vision, Ocular , Pattern Recognition, Visual
16.
Elife ; 12, 2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38478405

ABSTRACT

Previous research has found that prolonged eye-based attention can bias ocular dominance. If one eye views a regular movie for a prolonged period while the opposite eye views a backward movie of the same episode, perceptual ocular dominance shifts toward the eye previously viewing the backward movie. Yet it remains unclear whether the role of eye-based attention in this phenomenon is causal. To address this issue, the present study relied on both functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS). We found robust activation of the frontal eye field (FEF) and intraparietal sulcus (IPS) when participants watched the dichoptic movie while focusing their attention on the regular movie. Interestingly, we found a robust attention-induced ocular dominance shift when the cortical function of the vertex or IPS was transiently inhibited by continuous theta burst stimulation (cTBS), yet the effect was significantly attenuated, to a negligible level, when cTBS was delivered to FEF. A control experiment verified that the attenuation of the ocular dominance shift after inhibitory stimulation of FEF was not due to any impact of cTBS on the binocular rivalry measurement of ocular dominance. These findings suggest that the fronto-parietal attentional network is involved in controlling eye-based attention in the 'dichoptic-backward-movie' adaptation paradigm and that, within this network, FEF plays a crucial causal role in generating the attention-induced ocular dominance shift.


Subject(s)
Dominance, Ocular , Transcranial Magnetic Stimulation , Humans , Transcranial Magnetic Stimulation/methods , Attention/physiology , Frontal Lobe/physiology , Parietal Lobe/physiology , Photic Stimulation/methods
17.
J Neurosci ; 44(17)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38508715

ABSTRACT

Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF) type profile for auditory evoked but feedback (FB) type for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, auditory evoked response showed peaks at 37 and 90 ms and visual evoked response at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by just an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.


Subject(s)
Acoustic Stimulation , Auditory Cortex , Evoked Potentials, Visual , Magnetoencephalography , Photic Stimulation , Humans , Auditory Cortex/physiology , Magnetoencephalography/methods , Female , Male , Adult , Photic Stimulation/methods , Evoked Potentials, Visual/physiology , Acoustic Stimulation/methods , Models, Neurological , Young Adult , Evoked Potentials, Auditory/physiology , Neurons/physiology , Brain Mapping/methods
18.
J Neurosci ; 44(17)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38438256

ABSTRACT

Recognizing faces regardless of their viewpoint is critical for social interactions. Traditional theories hold that view-selective early visual representations gradually become tolerant to viewpoint changes along the ventral visual hierarchy. Newer theories, based on single-neuron monkey electrophysiological recordings, suggest a three-stage architecture including an intermediate face-selective patch abruptly achieving invariance to mirror-symmetric face views. Human studies combining neuroimaging and multivariate pattern analysis (MVPA) have provided convergent evidence of view selectivity in early visual areas. However, contradictory conclusions have been reached concerning the existence in humans of a mirror-symmetric representation like that observed in macaques. We believe these contradictions arise from low-level stimulus confounds and data analysis choices. To probe for low-level confounds, we analyzed images from two face databases. Analyses of image luminance and contrast revealed biases across face views described by even polynomials-i.e., mirror-symmetric. To explain major trends across neuroimaging studies, we constructed a network model incorporating three constraints: cortical magnification, convergent feedforward projections, and interhemispheric connections. Given the identified low-level biases, we show that a gradual increase of interhemispheric connections across network-layers is sufficient to replicate view-tuning in early processing stages and mirror-symmetry in later stages. Data analysis decisions-pattern dissimilarity measure and data recentering-accounted for the inconsistent observation of mirror-symmetry across prior studies. Pattern analyses of human fMRI data (of either sex) revealed biases compatible with our model. The model provides a unifying explanation of MVPA studies of viewpoint selectivity and suggests observations of mirror-symmetry originate from ineffectively normalized signal imbalances across different face views.


Subject(s)
Facial Recognition , Humans , Male , Female , Facial Recognition/physiology , Adult , Neuroimaging/methods , Photic Stimulation/methods , Models, Neurological , Visual Cortex/physiology , Visual Cortex/diagnostic imaging , Magnetic Resonance Imaging/methods , Young Adult
19.
Cortex ; 173: 339-354, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38479348

ABSTRACT

Studies using frequency-tagging in electroencephalography (EEG) have dramatically increased in the past 10 years, in a variety of domains and populations. Here we used Fast Periodic Visual Stimulation (FPVS) combined with an oddball design to explore visual word recognition. Given the paradigm's high sensitivity, it is crucial for future basic research and clinical application to prove its robustness across variations of designs, stimulus types and tasks. This paradigm uses periodicity of brain responses to measure discrimination between two experimentally defined categories of stimuli presented periodically. EEG was recorded in 22 adults who viewed words inserted every 5 stimuli (at 2 Hz) within base stimuli presented at 10 Hz. Using two discrimination levels (deviant words among nonwords or pseudowords), we assessed the impact of relative frequency of item repetition (set size or item repetition controlled for deviant versus base stimuli), and of the orthogonal task (focused or deployed spatial attention). Word-selective occipito-temporal responses were robust at the individual level (significant in 95% of participants), left-lateralized, larger for the prelexical (nonwords) than lexical (pseudowords) contrast, and stronger with a deployed spatial attention task as compared to the typically used focused task. Importantly, amplitudes were not affected by item repetition. These results help understanding the factors influencing word-selective EEG responses and support the validity of FPVS-EEG oddball paradigms, as they confirm that word-selective responses are linguistic. Second, they show its robustness against design-related factors that could induce statistical (ir)regularities in item rate. They also confirm its high individual sensitivity and demonstrate how it can be optimized, using a deployed rather than focused attention task, to measure implicit word recognition processes in typical and atypical populations.


Subject(s)
Brain , Electroencephalography , Adult , Humans , Photic Stimulation/methods , Brain/physiology , Attention , Linguistics
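The frequency-tagging logic described in the abstract above, a 2 Hz word (oddball) stream embedded in a 10 Hz base stream, is typically read out as spectral peaks at exactly the tagged rates. A toy sketch of that readout on synthetic data, not the authors' pipeline; the sampling rate, response amplitudes, and noise level are assumptions:

```python
import numpy as np

fs = 500.0   # sampling rate in Hz (assumed)
dur = 20.0   # recording length in seconds (assumed)
t = np.arange(0, dur, 1 / fs)

base_f, oddball_f = 10.0, 2.0  # base and oddball (word) presentation rates
rng = np.random.default_rng(1)

# Toy EEG: strong base response, weaker oddball response, plus white noise.
signal = (1.0 * np.sin(2 * np.pi * base_f * t)
          + 0.3 * np.sin(2 * np.pi * oddball_f * t)
          + 0.5 * rng.standard_normal(t.size))

# Amplitude spectrum; 20 s of data gives 0.05 Hz resolution, so both
# tagged frequencies fall on exact FFT bins.
spectrum = np.abs(np.fft.rfft(signal)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def amp_at(f):
    """Spectral amplitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

# Tagged responses stand out as narrow peaks far above the noise floor.
noise_floor = np.median(spectrum)
assert amp_at(base_f) > 10 * noise_floor
assert amp_at(oddball_f) > 10 * noise_floor
```

In a real FPVS analysis, the oddball response would also be summed across its harmonics (4, 6, 8 Hz, excluding the base rate) and baseline-corrected against neighboring bins; this sketch only shows the core peak-over-noise-floor logic.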
20.
Cogn Res Princ Implic ; 9(1): 17, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38530617

ABSTRACT

Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects' eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers' eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. Image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.'s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings were discussed in relation to theories of scene perception and their implications for automation development.


Subject(s)
Eye Movements , Visual Perception , Humans , Photic Stimulation/methods , Automation , Records